Don’t get caught out by the unexpected – Steve Lambert – Ep 75

In the last couple of weeks the world has shown that the only predictable thing for many of us who deliver technology is the unpredictability of what we have to deal with. From the massive data breach at British Airways to the catastrophic impact of storms on both sides of the globe, these are incidents we should be prepared for. The question is, are we?

If your organisation was impacted by something like a hurricane, causing flooding and power outages, how would you react? If you suffered a data breach, what would you do? Who would you turn to? What’s the plan?

Planning for these incidents is a crucial part of modern business practice. In some cases it is mandated; in others we appreciate the value of planning and develop continuity and incident response plans. However, some of us don’t have one, or if we do, we are not sure where it is or whether it works!

So, what if you don’t have a plan, or aren’t sure your plan has value? Then this episode of the podcast is for you, as we look at business continuity planning with my guest, continuity planning consultant Steve Lambert of Biscon Planning.

Steve has many years’ experience in the industry, both at Biscon and previously in local government emergency planning. In this episode Steve shares his business planning experience and outlines some of the steps you should be taking to ensure that, in the event of an “incident”, you have a plan to overcome it and don’t get caught out.

I chat with Steve on a range of topics: why we need a plan at all, and how continuity planning goes beyond IT. We discuss the types of incidents you need to plan for and compare operational and enterprise risks.

We look at the evolving incident landscape and how data breach is now a key part of continuity planning. Steve then takes us through some of the steps you need to consider when building a plan, from understanding risk appetite to impact assessment. We also look at the importance of testing plans, and crucially not only your own plans but those of your suppliers: if they have a critical failure, do you know how it impacts you?

We wrap up by looking at some practical steps, including how Biscon can help you with a free review and ways you can highlight the importance of planning across your business.

The importance of incident planning cannot be overstated, and Steve provides some great tips on how to build and test your plans.

To find out more about Biscon and their services you can visit them at https://www.biscon.co.uk/ and follow them on twitter @bisconplanning

If you’d like to test how you would respond to an incident, you may like to follow this scenario shared recently on the BBC’s website.

Until next time – thanks for listening


Logging and learning your public cloud – Colin Fernandes – Ep 74

In the last of our series looking at the shift to public cloud, we discuss getting the best from your cloud and the value of understanding the behaviour of your cloud infrastructure.

Initially the move to cloud was seen as a way of delivering lower-cost infrastructure or test and dev environments. However, this is beginning to change; today more than ever the move is driven by agility, flexibility and reduced time to delivery, a focus on outcomes rather than cost and technology. This shift is a positive one: technology investments should always be about the outcome and a broader end goal, not technology adoption for technology’s sake.

When attempting to achieve these outcomes it’s important that our platforms are performing and delivering in the way we need them to. The ability to log, analyse and gain useful insight into the performance of our estate is therefore a crucial part of making sure our public cloud investment is successful.

On this show I’m joined by Sumo Logic’s Colin Fernandes as we look at public cloud, the value it delivers and how an understanding of its performance is crucial, not only to help achieve desired outcomes but to do so while still meeting those ever-critical security and governance requirements.

Colin is a self-proclaimed IT veteran with 32 years’ experience in the industry, starting out at ICL and arriving at Sumo Logic via the likes of IBM and VMware. That longevity puts Colin in a great position to comment on what he sees in today’s market and how cloud has disrupted, and continues to disrupt, our use of technology.

We start by looking at the similarities Colin sees between today’s shift to cloud and those early days with VMware. We also discuss how organisations are starting to look at cloud as a way to drive new applications and innovation, and how this is as much about a cultural shift as it is about technology.

We chat about big shifts in focus, with the adoption of serverless, modern design architectures such as containers, and the increasingly pervasive ability to utilise machine learning and analytics. We also explore the problems that come with cloud, particularly those “day one” problems of monitoring, security and compliance, and why it’s critical that security be part of the cloud delivery cycle and not an afterthought.

We finish up talking about Sumo Logic, what they bring to the market and how their ability to analyse and use data from their customers provides the valuable insight needed to achieve value from a cloud investment.

This is a great time to find out more about Sumo Logic, as this week (starting 12th September 2018) they hold their annual user conference, Illuminate. You can track the event via the live keynote stream on www.sumologic.com, where you can also find more info about what they do.

If you want to follow up with Colin, you can find him on LinkedIn as well as via email at cfernandez@sumologic.com

I really enjoyed this chat; with his experience in the market, Colin provided valuable insight into public cloud and how to get real value from it.

Next time we are looking at the world of incident management, how to plan for it and how to ensure a technology disaster or data breach doesn’t catch you out.

Until then, thanks for listening.

Managing multiple clouds – Joe Kinsella – Ep 73

This show was recorded before the announcement on August 27th, 2018 of CloudHealth Technologies’ acquisition by VMware.

This is the third in our series looking at the move to public cloud, the challenges involved and some of the tips and technologies that can help you overcome them. In this episode we look at perhaps the biggest challenge facing most organisations moving to public cloud: multi-cloud.

A few weeks ago I published a post about multi-cloud becoming the technology industry’s holy grail (Tech and the holy multi cloud grail), as the industry looks for ways to extract the complexity from multi-cloud environments and allow us to build solutions that encompass our infrastructure, be it on-prem, in a co-lo or with a public hyperscale provider. The benefits of multi-cloud deployments are many, and they will be a major part of our future use of cloud.

On this week’s show we look at those issues surrounding multi-cloud, particularly how to manage it, maintain cost efficiency, and govern and secure our cloud-based assets. To discuss this I’m joined by Joe Kinsella, CTO and founder of CloudHealth Tech, a company that has built a platform to pull together information from numerous environments, consolidate it into one place and allow you to make informed, proactive decisions to ensure you use your technology in the best way you can.

During the episode we explore some wide-ranging topics: why complexity is an issue, and how multi-cloud was initially “stumbled upon” but is now becoming a chosen strategy. We ask why we don’t expect cloud to be complex when much of what we do in our datacentres is very complicated. Joe also confesses that 3-4 years ago he was predicting the death of the on-prem DC, and explains why he has re-evaluated that view as hybrid becomes the deployment reality.

We also discuss the traits of a successful multi-cloud deployment and why a cloud-first strategy isn’t about putting everything in the cloud, but more about asking: can we use cloud? Should we use cloud?

We wrap up discussing the CloudHealth Tech platform, what it does and how it helps manage a multi-cloud environment by pulling together clouds, on-prem and automation platforms, connecting all the information to provide the business insights needed for proactive decision making. Finally, we look at the maturity of cloud management and how it needs to move beyond cost control to embrace security and governance as multi-cloud management evolves.

Joe gives some great insight and CloudHealth Technologies deliver a very powerful platform, so powerful that VMware saw fit to acquire them.

To find out more about CloudHealth Tech you can visit their website www.cloudhealthtech.com

Follow them on twitter @cloudhealthtech

You can find out more from Joe on twitter @joekinsella, his CloudHealth Tech blog www.cloudhealthtech.com/blog and finally his own blog hightechinthehub.com.

Enjoy and thanks for listening.

Building a modern data platform – exploiting the cloud

No modern data platform would be complete if we didn’t talk about the use of public cloud. Public cloud can play a very important part in building a modern data platform and provide us with capabilities we couldn’t get any other way.

In this part of our series we look at the benefits of public cloud, the challenges of adoption and how to overcome them and ensure we can embrace cloud as part of our platform.

Why is public cloud useful for our data?

If we look at the challenges normally associated with traditional approaches to data storage (scale, flexibility, data movement, commercials), it quickly becomes clear how cloud can be valuable.

While these challenges are common in traditional approaches, they are the areas where public cloud is strongest. It gives us scale that is almost infinite, a consumption model where we pay for what we need as we need it, and of course flexibility: the ability to take our data and do interesting things with it once it’s in the public cloud. From analytics and AI to the more mundane backup and DR, flexibility is one of the most compelling reasons for considering public cloud at all.

While the benefits are clear, why are more organisations not falling over themselves to move to cloud?

What’s it lacking?

It’s not what public cloud can do, but what it doesn’t, that tends to stop organisations wholeheartedly embracing it when it comes to data assets.

As we’ve worked through the different areas of building a modern data platform, our approach to data has been about more than storage: it’s insight, protection, availability, security and privacy, and these are things not normally associated with native cloud storage. We don’t want our move to cloud to mean we lose all of those capabilities, or have to implement and learn a new set of tools to deliver them.

Of course there is also the “data gravity” problem: we can’t have our cloud-based data siloed away from the rest of our platform, it has to be part of it. We need to be able to move data into the cloud, out again and between cloud providers, all while retaining enterprise control and management.

So how do we overcome these challenges?

How to make the cloud feel like the enterprise?

When it comes to the modern data platform, NetApp have developed into an ideal partner for helping to integrate public cloud storage. If we look back at part one of this series (Building a modern data platform – the storage), we discussed NetApp’s data services, which are built into their ONTAP operating system, making it the cornerstone of their data fabric strategy. What makes ONTAP that cornerstone is that, as a piece of software, it can be installed anywhere, which today also means public cloud.

Taking ONTAP and its data services into the cloud provides us with massive advantages: it allows us to deliver enterprise storage efficiencies and performance guarantees, and to use the enterprise tools we have made a key part of our platform with our cloud-based data as well.

NetApp has two ways to deploy ONTAP into public cloud. The first is Cloud Volumes ONTAP, a full ONTAP deployment on top of native cloud storage, providing all of the same enterprise data services we have on-prem, extending them into the cloud and seamlessly integrating them with our on-prem data stores.

An alternative, even more straightforward approach is having ONTAP delivered as a native service, with no ONTAP deployment or experience necessary. You order your service, enter a size and performance characteristics and away you go, with no concern at all for the underlying infrastructure, how it works or how it’s managed. You are provided, in seconds, with enterprise-class storage with data protection, storage efficiencies and performance service levels previously unheard of in native cloud storage.
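To give a feel for how little is involved, here is a rough sketch of ordering a volume from one such native service, using Azure NetApp Files and the Azure CLI as an example. The choice of service and all of the resource names are my own assumptions for illustration, not anything specified in this post, and the exact parameters may differ in your environment:

    # Create a 100 GiB NFS volume at the Premium service level.
    # The resource group, NetApp account, capacity pool, vnet and
    # delegated subnet are assumed to exist already.
    az netappfiles volume create \
        --resource-group demo-rg \
        --account-name demo-anf-account \
        --pool-name demo-pool \
        --name projects-vol \
        --location westeurope \
        --service-level Premium \
        --usage-threshold 100 \
        --file-path projects-vol \
        --vnet demo-vnet \
        --subnet anf-subnet

The point is less the specific command and more that capacity and service level are the only decisions you make; everything underneath is the provider’s problem.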

It’s not a strategy without integration

While adding enterprise capabilities is great, the idea of a modern data platform relies on having our data in the location we need it, when we need it, while maintaining management and control. This is where NetApp’s technology provides real advantage. Using ONTAP as a consistent endpoint provides the platform for integration, allowing us to take the same tools, policies and procedures at the core of our data platform and extend them to our data in the public cloud.

NetApp’s SnapMirror provides us with a data movement engine, so we can simply move data into, out of and between clouds. Replicating data in this way means that while our on-prem version can be the authoritative copy, it doesn’t have to be the only one. Replicating a copy of our data to a location for a one-off task, then destroying it once the task is complete, is a powerful capability and an important element of simplifying the extension of our platform into the cloud.
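To make that a little more concrete, here is a minimal sketch of the sort of SnapMirror workflow involved, assuming an ONTAP 9 system on-prem, a Cloud Volumes ONTAP destination and cluster/SVM peering already in place; the SVM and volume names below are hypothetical:

    # Run on the destination (cloud) cluster: define the relationship
    snapmirror create -source-path onprem_svm:projects_vol -destination-path cloud_svm:projects_mirror -type XDP -policy MirrorAllSnapshots

    # Baseline transfer of the data into the cloud
    snapmirror initialize -destination-path cloud_svm:projects_mirror

    # Incremental updates thereafter, typically on a schedule
    snapmirror update -destination-path cloud_svm:projects_mirror

    # Once a one-off task is finished, the relationship (and the
    # destination volume) can simply be removed
    snapmirror delete -destination-path cloud_svm:projects_mirror

The same relationship can also be broken off to make the cloud copy writable, which is what lets the on-prem volume stay authoritative without ever being the only copy.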

Summary

Throughout this series we have asked the question “do we have to use technology X to deliver this service?” The reality, of course, is no. But NetApp are a key element of our modern data platform because of this cloud integration capability; the option to provide consistent data services across multiple locations is extremely powerful, allowing us to take advantage of cloud while maintaining our enterprise controls.

While I’ve not seen any other data services provider come close to what NetApp are doing in this space, the important thing in your design strategy, if it is to include public cloud, is to ensure you have appropriate access to data services, integration, management and control. It’s crucial that you don’t put data at risk or diminish the capabilities of your data platform by using cloud.

This is part 6 in a series of posts on building a modern data platform, you can find the introduction and other parts of this series here.

Assessing the risk in public cloud – Darron Gibbard – Ep72

As the desire to integrate public cloud into our organisations’ IT continues to grow, the need to maintain control and security of our key assets is a challenge, but one we need to overcome if we are going to make cloud a fundamental part of our future IT infrastructure.

The importance of security and reducing our vulnerabilities is not, of course, unique to public cloud; it’s a key part of any organisation’s IT and data strategy. However, the move to public cloud does introduce some different challenges, with many of our services and data now sitting well outside the protective walls of our datacentre. This means that if our risks and vulnerabilities go unidentified and unmanaged, we are open to the potential of major and wide-reaching security breaches.

This week’s Tech Interviews is the second in our series looking at what organisations need to consider as they make the move to public cloud. In this episode we focus on risk: how to assess it, how to gain visibility into our systems regardless of location, and how to mitigate the risks our modern infrastructure may come across.

To help discuss the topic of risk management in the cloud, I’m joined by Darron Gibbard. Darron is the Managing Director for EMEA North and Chief Technology Security Officer for Qualys; with 25 years’ experience in the enterprise security, risk and compliance industry, he is well placed to discuss the challenges of public cloud.

In this episode we look at the vulnerabilities a move to cloud can create as our data and services are no longer the preserve of the data centre. We discuss whether the cloud is as high a risk as we may be led to believe, and why a lack of visibility into risks and threats is more of a problem than any inherent risk in a cloud platform.

Darron shares some insight into building a risk-based approach to using cloud and how to assess risk, and why understanding the impact of a vulnerability is just as useful as, if not more useful than, working out the likelihood of a cloud-based “event”.

We wrap up with a discussion of Qualys’s five principles of security and their approach to transparent orchestration, ensuring that all the additional information we can gather is used effectively.

The challenges around vulnerability and risk management when we move to public cloud shouldn’t be ignored, but it was refreshing to hear Darron present a balanced view, arguing that the cloud, when managed correctly, is no riskier than any enterprise environment.

Qualys are an interesting company with a great portfolio of tools, including a number that are free to use and can help companies of all sizes reduce their risk exposure both on-prem and in the cloud. To find out more about Qualys, visit www.qualys.com.

You can also contact Darron by email dgibbard@qualys.com or connect with him on LinkedIn.

Thanks for listening.

For the first show in this series, check out – Optimising the public cloud – Andrew Hillier – Ep71